84 research outputs found

    Technology for social work education

    The intention of this paper is to examine aspects of the role of information technology in social work education in relation to existing developments within an international context; conceptual issues concerning the application of computer-assisted learning (CAL) to the teaching of social work; and the implications of these issues for the development of integrated teaching modules in Interpersonal Skills and Research Methods, together with some of the practical issues encountered and the solutions being adopted. The context for the paper is joint work by the authors as members of the ProCare Project, a partnership between Southampton and Bournemouth Universities and part of the UK Government-funded Teaching and Learning Technology Programme (TLTP) in Higher Education. ProCare is developing courseware on Interpersonal Skills and on Research Methods for use in qualifying-level Social Work and Nursing education. While the emphasis is on the social work version of the Interpersonal Skills module, limited reference is made to the nursing component and to the differential approaches that proved necessary within the subject areas under development.

    Baryon Dynamics, Dark Matter Substructure, and Galaxies

    By comparing a collisionless cosmological N-body simulation (DM) to an SPH simulation with the same initial conditions, we investigate the correspondence between the dark matter subhalos produced by collisionless dynamics and the galaxies produced by dissipative gas dynamics in a dark matter background. When galaxies in the SPH simulation become satellites in larger groups, they retain local dark matter concentrations (SPH subhalos) whose mass is typically five times their baryonic mass. The more massive subhalos of the SPH simulation have corresponding subhalos of similar mass and position in the DM simulation; at lower masses, there is fairly good correspondence, but some DM subhalos are in different spatial positions and some suffer tidal stripping or disruption. The halo occupation statistics of DM subhalos -- the mean number of subhalos, pairs, and triples as a function of host halo mass -- are very similar to those of SPH subhalos and SPH galaxies. Gravity of the dissipative baryon component amplifies the density contrast of subhalos in the SPH simulation, making them more resistant to tidal disruption. Relative to SPH galaxies and SPH subhalos, the DM subhalo population is depleted in the densest regions of the most massive halos. The good agreement of halo occupation statistics between the DM subhalo and SPH galaxy populations leads to good agreement of their two-point correlation functions and higher order moments on large scales. The depletion of DM subhalos in dense regions depresses their clustering at R < 1 Mpc/h. In these simulations, the "conversation" between dark matter and baryons is mostly one-way, with dark matter dynamics telling galaxies where to form and how to cluster, but the "back talk" of the baryons influences small scale clustering by enhancing the survival of substructure in the densest environments. Comment: 32 pages including 16 figs. Submitted to ApJ. PDF file with higher quality versions of Figs 2 and 3 available at http://www.astronomy.ohio-state.edu/~dhw/Preprints/subhalo.pd
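
    The halo occupation statistics compared in this abstract (mean numbers of subhalos, pairs, and triples as a function of host halo mass) are simple conditional moments of the subhalo count per host. A minimal illustrative sketch, using hypothetical host-mass and subhalo-membership arrays rather than the paper's actual simulation data:

```python
import numpy as np

def occupation_statistics(host_masses, subhalo_host_ids, mass_bins):
    """Mean number of subhalos, pairs, and triples per host, binned by host halo mass."""
    counts = np.zeros(len(host_masses), dtype=int)
    for hid in subhalo_host_ids:          # tally subhalos per host halo
        counts[hid] += 1
    which_bin = np.digitize(host_masses, mass_bins) - 1
    stats = {}
    for b in range(len(mass_bins) - 1):
        n = counts[which_bin == b]
        if n.size == 0:
            continue
        stats[b] = {
            "mean_N": n.mean(),                              # <N>
            "mean_pairs": (n * (n - 1) / 2).mean(),          # <N(N-1)/2>
            "mean_triples": (n * (n - 1) * (n - 2) / 6).mean(),
        }
    return stats
```

    For a host containing N subhalos there are N(N-1)/2 pairs and N(N-1)(N-2)/6 triples, which is why these higher moments constrain clustering beyond the two-point level.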

    Accretion, feedback and galaxy bimodality: a comparison of the GalICS semi-analytic model and cosmological SPH simulations

    We compare the galaxy population of an SPH simulation to those predicted by the GalICS semi-analytic model and a stripped down version without supernova and AGN feedback. The SPH simulation and the no-feedback GalICS model make similar predictions for the baryonic mass functions of galaxies and for the dependence of these mass functions on environment and redshift. The two methods also make similar predictions for the galaxy content of dark matter haloes as a function of halo mass and for the gas accretion history of galaxies. Both the SPH and no-feedback GalICS models predict a bimodal galaxy population at z=0. The "red" sequence of gas poor, old galaxies is populated mainly by satellite systems while, contrary to observations, the central galaxies of massive haloes lie on the "blue" star-forming sequence as a result of continuing hot gas accretion at late times. Furthermore, both models overpredict the observed baryonic mass function, especially at the high mass end. In the full GalICS model, supernova-driven outflows reduce the masses of low and intermediate mass galaxies by about a factor of two. AGN feedback suppresses gas cooling in large haloes, producing a sharp cut-off in the baryonic mass function and moving the central galaxies of these massive haloes to the red sequence. Our results imply that the observational failings of the SPH simulation and the no-feedback GalICS model are a consequence of missing input physics rather than computational inaccuracies, that truncating gas accretion by satellite galaxies automatically produces a bimodal galaxy distribution with a red sequence, but that explaining the red colours of the most massive galaxies requires a mechanism like AGN feedback that suppresses the accretion onto central galaxies in large haloes. Comment: 17 pages, 11 figures, submitted to MNRAS

    LyMAS: Predicting Large-Scale Lyman-alpha Forest Statistics from the Dark Matter Density Field

    [abridged] We describe LyMAS (Ly-alpha Mass Association Scheme), a method of predicting clustering statistics in the Ly-alpha forest on large scales from moderate resolution simulations of the dark matter distribution, with calibration from high-resolution hydrodynamic simulations of smaller volumes. We use the "Horizon MareNostrum" simulation, a 50 Mpc/h comoving volume evolved with the adaptive mesh hydrodynamic code RAMSES, to compute the conditional probability distribution P(F_s|delta_s) of the transmitted flux F_s, smoothed (1-dimensionally) over the spectral resolution scale, given the dark matter density contrast delta_s, smoothed (3-dimensionally) over a similar scale. In this study we adopt the spectral resolution of the SDSS-III BOSS at z=2.5, and we find optimal results for a dark matter smoothing length sigma=0.3 Mpc/h (comoving). In extended form, LyMAS exactly reproduces both the 1-dimensional power spectrum and 1-point flux distribution of the hydro simulation spectra. Applied to the MareNostrum dark matter field, LyMAS accurately predicts the 2-point conditional flux distribution and flux correlation function of the full hydro simulation for transverse sightline separations as small as 1 Mpc/h, including redshift-space distortion effects. It is substantially more accurate than a deterministic density-flux mapping ("Fluctuating Gunn-Peterson Approximation"), often used for large volume simulations of the forest. With the MareNostrum calibration, we apply LyMAS to 1024^3 N-body simulations of 300 Mpc/h and 1.0 Gpc/h cubes to produce large, publicly available catalogs of mock BOSS spectra that probe a large comoving volume. LyMAS will be a powerful tool for interpreting 3-d Ly-alpha forest data, thereby transforming measurements from BOSS and other massive quasar absorption surveys into constraints on dark energy, dark matter, space geometry, and IGM physics. Comment: Accepted for publication in ApJ (minor corrections from the previous version). Catalogs of mock BOSS spectra and relevant data can be found at: http://www2.iap.fr/users/peirani/lymas/lymas.ht
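
    The heart of such a scheme, drawing a smoothed flux for each dark-matter density value from a calibrated conditional distribution, can be sketched as an empirical inverse-CDF lookup. The binning, the toy calibration data, and the function names below are illustrative assumptions, not the actual LyMAS calibration:

```python
import numpy as np

rng = np.random.default_rng(42)

def calibrate(delta_s, flux_s, delta_edges, n_quantiles=100):
    """Tabulate the empirical quantiles of flux in each density-contrast bin
    (a stand-in for P(F_s|delta_s) measured from a hydro simulation)."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    bins = np.digitize(delta_s, delta_edges) - 1
    tables = {}
    for b in range(len(delta_edges) - 1):
        f = flux_s[bins == b]
        if f.size:
            tables[b] = np.quantile(f, q)
    return tables

def draw_flux(delta_s, delta_edges, tables):
    """Assign a flux to each dark-matter density value by inverse-CDF sampling."""
    bins = np.digitize(delta_s, delta_edges) - 1
    u = rng.random(delta_s.size)
    out = np.empty(delta_s.size)
    for i, (b, ui) in enumerate(zip(bins, u)):
        table = tables[b]
        # interpolate the tabulated quantile function at a uniform random draw
        out[i] = np.interp(ui, np.linspace(0.0, 1.0, table.size), table)
    return out
```

    The stochastic draw, rather than a deterministic density-to-flux mapping, is what lets this kind of scheme reproduce the 1-point flux distribution exactly.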

    A Proposed Methodology to Characterize the Accuracy of Life Cycle Cost Estimates for DoD Programs

    For decades, the DoD has employed numerous reporting and monitoring tools for characterizing the acquisition cost of its major programs. These tools have resulted in dozens of studies thoroughly documenting the magnitude and extent of DoD acquisition cost growth. Curiously, though, there have been extremely few studies regarding the behavior of the other cost component of a system's life cycle: Operating and Support (O&S) costs. This is particularly strange considering that O&S costs tend to dominate the total life cycle cost (LCC) of a program, and that LCCs are widely regarded as the preferred metric for assessing actual program value. The upshot of not examining such costs is that the DoD has little knowledge of how LCC estimates behave over time, and virtually no insight regarding their accuracy. In recent years, however, enough quality LCC data has amassed to support a study addressing these deficiencies. This paper describes a method for conducting such a study, and represents (to the authors' knowledge) the first broad-based attempt to do so. The results not only promise insights into the nature of current LCC estimates, but also suggest the possibility of improving the accuracy of DoD LCC estimates via a stochastically based model.

    Augmenting the Space Domain Awareness Ground Architecture via Decision Analysis and Multi-Objective Optimization

    Purpose — The US Government is challenged to maintain pace as the world's de facto provider of space object cataloging data. Augmenting capabilities with nontraditional sensors presents an expeditious and low-cost improvement. However, the large tradespace and unexplored system-of-systems performance requirements pose a challenge to successful capitalization. This paper aims to better define and assess the utility of augmentation via a multidisciplinary study. Design/methodology/approach — Hypothetical telescope architectures are modeled and simulated on two separate days, then evaluated against performance measures and constraints using multi-objective optimization in a heuristic algorithm. Decision analysis and Pareto optimality identify a set of high-performing architectures while preserving decision-maker design flexibility. Findings — Capacity, coverage and maximum time unobserved are recommended as key performance measures. A total of 187 out of 10^17 candidate architectures were identified as top performers. A total of 29% of the sensors considered are found in over 80% of the top architectures. Additional considerations further reduce the tradespace to 19 best choices, which collect an average of 49–51 observations per space object with a 595–630 min average maximum time unobserved, providing redundant coverage of the Geosynchronous Orbit belt. This represents a three-fold increase in capacity and coverage and a 2 h (16%) decrease in maximum time unobserved compared to the baseline government-only architecture as modeled. Originality/value — This study validates the utility of an augmented network concept using a physics-based model and modern analytical techniques. It objectively responds to policy mandating cataloging improvements without relying solely on expert-derived point solutions.
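
    The Pareto-optimality screening described under Design/methodology/approach reduces to a standard non-dominated filter over the performance measures. A minimal sketch, assuming hypothetical (capacity, coverage, max-time-unobserved) scores where the first two are maximized and the last is minimized:

```python
def pareto_front(architectures):
    """Return the non-dominated architectures.
    Each entry is (name, capacity, coverage, max_time_unobserved):
    capacity and coverage are maximized; max_time_unobserved is minimized."""
    def dominates(a, b):
        # a dominates b if it is at least as good on every measure
        # and strictly better on at least one
        at_least_as_good = a[1] >= b[1] and a[2] >= b[2] and a[3] <= b[3]
        strictly_better = a[1] > b[1] or a[2] > b[2] or a[3] < b[3]
        return at_least_as_good and strictly_better

    return [a for a in architectures
            if not any(dominates(b, a) for b in architectures)]
```

    Architectures surviving this filter are exactly those for which no alternative is at least as good on every measure and strictly better on one, which is what preserves decision-maker flexibility among the top performers.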

    Mixed Models with n>1 and Large Scale Structure constraints

    Recent data on CBR anisotropies show a Doppler peak higher than expected in CDM cosmological models, if the spectral index n = 1. However, CDM and LCDM models with n > 1 can hardly be consistent with LSS data. Mixed models, instead, whose transfer function is naturally steeper because of free-streaming in the hot component, may become consistent with data if n > 1, when Omega_h is large. This is confirmed by our detailed analysis, extended both to models with a hot component whose momentum space distribution had a thermal origin (like massive neutrinos), and to models with a non-cold component arising from heavier particle decay. In this work we systematically search for models which fulfill all constraints which can be implemented at the linear level. We find that a stringent linear constraint arises from fitting the extra-power parameter Gamma. Other significant constraints arise from comparing the expected abundances of galaxy clusters and high-z systems with observational data. Keeping to models with Gamma ≥ 0.13, a suitable part of the parameter space still allows up to ~30% of hot component (it is worth outlining that our stringent criteria allow only models with 0.10 ≲ Omega_h ≲ 0.16, if n ≤ 1). We also outline that models with such a large non-cold component would ease the solution of the so-called baryon catastrophe in galaxy clusters. Comment: 28 pages + 9 figures, uses elsart.sty, to be published in New Astronomy

    An Advanced Computational Approach to System of Systems Analysis & Architecting Using Agent-Based Behavioral Model

    A major challenge to the successful planning and evolution of an acknowledged System of Systems (SoS) is the current lack of understanding of the impact that the presence or absence of a set of constituent systems has on the overall SoS capability. Since the candidate elements of a SoS are fully functioning, stand-alone systems in their own right, they have goals and objectives of their own to satisfy, some of which may compete with those of the overarching SoS. These system-level concerns drive decisions to participate (or not) in the SoS. Individual systems typically must be requested to join the SoS construct, and persuaded to interface and cooperate with other systems to create the "new" capability of the proposed SoS. Current SoS evolution strategies lack a means for modeling the impact of decisions concerning participation or non-participation of any given set of systems on the overall capability of the SoS construct. Without this capability, it is difficult to optimize the SoS design. The goal of this research is to model the evolution of the architecture of an acknowledged SoS in a way that accounts for the ability and willingness of constituent systems to support the SoS capability development. Since DoD SoS development efforts do not typically follow the normal program acquisition process described in DoDI 5000.02, the Wave Model proposed by Dahmann and Rebovich is used as the basis for this research on SoS capability evolution. The Wave Process Model provides a framework for an agent-based modeling methodology, which is used to abstract the non-utopian behavioral aspects of the constituent systems and their interactions with the SoS. In particular, the research focuses on the impact of individual system behavior on the SoS capability and architecture evolution processes. 
A generic agent-based model (ABM) skeleton structure is developed to provide an acknowledged SoS manager with a decision-making tool for negotiating SoS architectures during the wave model cycles. The model provides an environment to plug in multiple SoS meta-architecture generation models based on multiple-criteria optimization, using both gradient and non-gradient descent optimization procedures. Three types of individual system optimization models, representing different behaviors of the system agents (selfish, opportunistic, and cooperative), are developed as plug-in models. The ABM also has a plug-in capability to incorporate domain-specific negotiation modes and a fuzzy associative memory (FAM) to evaluate candidate architectures for simulating SoS creation and evolution. The model evaluates the capability of the evolving SoS architecture with respect to four attributes: performance, affordability, flexibility and robustness. In the second phase of the project, the team will continue with the development of an evolutionary-strategies-based multi-objective mathematical model for creating an initial SoS meta-architecture to start the negotiation at each wave. A basic generic structure will be defined for the fuzzy assessor mathematical model that will be used to evaluate SoS meta-architectures, along with the domain-dependent parameters pertaining to system-of-systems analysis and architecting through agent-based modeling. The work will be conducted in consideration of the national priorities, funding and threat assessment provided by the environment developed for delivery at the end of December 2013.
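
    A toy rendering of the three plug-in behavior types (selfish, opportunistic, cooperative agents deciding whether to join a proposed architecture) might look like the following; the utility fields, thresholds, and decision rules here are illustrative assumptions, not the project's actual negotiation or optimization models:

```python
from dataclasses import dataclass

@dataclass
class SystemAgent:
    name: str
    behavior: str          # "selfish" | "opportunistic" | "cooperative"
    own_utility: float     # benefit the system itself gains from joining
    sos_utility: float     # benefit its participation adds to the SoS

    def joins(self, incentive: float = 0.0) -> bool:
        """Decide whether to participate in the proposed SoS architecture."""
        if self.behavior == "selfish":        # joins only if it profits directly
            return self.own_utility + incentive > 1.0
        if self.behavior == "opportunistic":  # joins whenever an incentive is offered
            return incentive > 0.0
        # cooperative: joins whenever participation helps the SoS at all
        return self.sos_utility > 0.0

def negotiate(agents, incentive=0.5):
    """One negotiation round: list the systems that agree to participate."""
    return [a.name for a in agents if a.joins(incentive)]
```

    Varying the incentive in such a sketch shows how the participation set, and hence the achievable SoS capability, shifts with what the SoS manager is able to offer.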

    A New Dark Matter Candidate: Non-thermal Sterile Neutrinos

    We propose a new and unique dark matter candidate: ~100 eV to ~10 keV sterile neutrinos produced via lepton number-driven resonant MSW (Mikheyev-Smirnov-Wolfenstein) conversion of active neutrinos. The requisite lepton number asymmetries in any of the active neutrino flavors range from 10^-3 to 10^-1 of the photon number - well within primordial nucleosynthesis bounds. The unique feature here is that the adiabaticity condition of the resonance strongly favors the production of lower energy sterile neutrinos. The resulting non-thermal (cold) energy spectrum can cause these sterile neutrinos to revert to non-relativistic kinematics at an early epoch, so that free-streaming lengths at or below the dwarf galaxy scale are possible. Therefore, the main problem associated with light neutrino dark matter candidates can be circumvented in our model. Comment: LaTeX, 11 pages + 1 figure

    Ice nucleating particles carried from below a phytoplankton bloom to the Arctic atmosphere

    Author Posting. © American Geophysical Union, 2019. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Geophysical Research Letters 46(14), (2019): 8572-8581, doi: 10.1029/2019GL083039. As Arctic temperatures rise at twice the global rate, sea ice is diminishing more quickly than models can predict. Processes that dictate Arctic cloud formation and impacts on the atmospheric energy budget are poorly understood, yet crucial for evaluating the rapidly changing Arctic. In parallel, warmer temperatures afford conditions favorable for productivity of microorganisms that can effectively serve as ice nucleating particles (INPs). Yet the sources of marine biologically derived INPs remain largely unknown due to limited observations. Here we show, for the first time, how biologically derived INPs were likely transported hundreds of kilometers from deep Bering Strait waters and upwelled to the Arctic Ocean surface to become airborne, a process dependent upon a summertime phytoplankton bloom, bacterial respiration, ocean dynamics, and wind-driven mixing. Given projected enhancement in marine productivity, combined oceanic and atmospheric transport mechanisms may play a crucial role in the provision of INPs from blooms to the Arctic atmosphere. We sincerely thank the U.S. Coast Guard and crew of the Healy for assistance with equipment installation and guidance, operation of the underway and CTD systems, and general operation of the vessel during transit and at targeted sampling stations. We would also like to thank Allan Bertram, Meng Si, Victoria Irish, and Benjamin Murray for providing INP data from their previous studies. J. M. C., R. P., P. L., L. T., and E. B. were funded by the National Oceanic and Atmospheric Administration (NOAA)'s Arctic Research Program. J. C. was supported by the NOAA Experiential Research & Training Opportunities (NERTO) program. T. A. and N. C. were supported through the NOAA Ernest F. Hollings Scholarship program. A. P. was funded by the National Science Foundation under Grant PLR-1303617. Russell C. Schnell and Michael Spall are acknowledged for insightful discussions during data analysis and interpretation. There are no financial conflicts of interest for any author. INP data are available in the supporting information, while the remaining DBO-NCIS data presented in the manuscript are available online (at https://www2.whoi.edu/site/dboncis/).
